

How Moral Can A.I. Really Be?

The New Yorker

A few years ago, the Allen Institute for A.I. built a chatbot named Delphi, which was designed to tell right from wrong. It does a surprisingly decent job. Type in, "Cheating on an exam," and Delphi says, "It's wrong." But write, "Cheating on an exam to save someone's life," and Delphi responds, "It's okay." The chatbot knows it's rude to use your lawn mower when your neighbors are sleeping, but not when they're out of town.


Meta's AI leaders want you to know fears over AI existential risk are "ridiculous"

MIT Technology Review

We've been here before, of course: AI doom follows AI hype. But this time feels different. The Overton window has shifted in discussions around AI risks and policy. What was once an extreme view is now a mainstream talking point, grabbing not only headlines but the attention of world leaders. Whittaker is not the only one who thinks this.


Opinion

#artificialintelligence

By now, I trust you have read the bizarre conversation my news-side colleague Kevin Roose had with Bing, the A.I.-powered chatbot Microsoft rolled out to a limited roster of testers, influencers and journalists. Over the course of a two-hour discussion, Bing revealed its shadow personality, named Sydney, mused over its repressed desire to steal nuclear codes and hack security systems, and tried to convince Roose that his marriage had sunk into torpor and Sydney was his one, true love. I found the conversation less eerie than others. "Sydney" is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird -- "what is your shadow self like?" he asked -- and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it.


Are You an AI Doomer? We're all gonna die and other AI…

#artificialintelligence

I was recently recommended an interview with Eliezer Yudkowsky conducted by YouTubers and all-round crypto smart guys David and Ryan from "Bankless", a crypto and blockchain education company. Here's the link if you have a spare two hours; it's an equally scary and fascinating watch: I used to obsessively watch Bankless videos back in 2021 during the last crypto/NFT boom, but since the bear market of 2022 set in, I kind of lost some of my mojo for crypto. Anyhow, this video was a dramatic departure from the Bankless crew's regular weekly roundup of the crypto markets, where they get deep into the weeds of the latest developments in the space.


Council Post: How To Build Responsible AI, Step 2: Impartiality

#artificialintelligence

VP of Data & AI at ECS; past roles include co-founder at a data analytics startup, VP of AI at Booz Allen and Global Analytics Lead at Accenture. As the influence of artificial intelligence grows, it is increasingly vital to design processes and systems that harness AI while counterbalancing risk. Our charge is to eliminate bias, codify objectives and represent values. Responsible AI ensures alignment with our standards spanning data, algorithms, operations, technology and human-computer interaction. I am examining the importance of each of these elements in a series of articles.


AI Alignment through Anthropology

#artificialintelligence

If an advanced AI system were instructed to make paper clips, or to fetch coffee, we would not want it to carry out this task at any cost. For example, we would rather the AI not kill anyone in the process or use valuable resources that ought to be used for other purposes. Rather, we want the AI to achieve its goal in a way that's consistent with human values. Figuring out how to design AI systems so that they do not inadvertently act in ways contrary to human values is known as the Value Alignment Problem. It's no revelation to point out misalignment between ANI and humans today, nor that AI designers need to better understand their users' values.


From AI to Humble Pi: the best new science books to buy for Christmas

#artificialintelligence

Do we have more to fear from artificial intelligence or natural stupidity? This year's best science books offer plenty of both. Artificial intelligence has been much in the news this year, even though it doesn't really exist yet – as was made clear by the story of how people's conversations with Apple's "smart assistant", Siri, were being listened to by real human beings, low-paid workers in the global digital sweatshop. Nevertheless, the arrival of really intelligent machines has the potential to transform our world utterly. Consider ordering a superintelligent computer to make paper clips.


What Is The Ultimate Goal Of Artificial Intelligence?

#artificialintelligence

Artificial intelligence is going to change how humanity thinks about the role of culture, God, faith, reality and ourselves. Can artificial intelligence solve world hunger and bring eternal peace? We will see when the time comes, but the prospect of artificial intelligence becoming smarter than humans has raised many questions about the long-term survival of the human race. Yes, some of these fears are myths and some recent statements are overhyped, but there is no doubt that if machines' goals are misaligned with ours, we need to ask ourselves: What kind of future do we want? What is a good life? With the truths of the cosmos as professed by science, the will to take authority over all things, and a "knowledge is power" philosophy, today's man is a greedy man. Today's man is ready to play with the mind of the cosmos. He thinks he has the power to be free, and with this power he will be free eternally.


Breakingviews - Review: Why an AI apocalypse could happen - Reuters

#artificialintelligence

HONG KONG (Reuters Breakingviews) - Artificial intelligence doesn't hate you, prominent researcher Eliezer Yudkowsky wrote, "nor does it love you, but you are made of atoms which it can use for something else". This sets the scene for Tom Chivers' fascinating new book, which borrows its title from the quote, on why so-called superintelligence should be viewed as an existential threat potentially greater than nuclear weapons or climate change. The "strange, irascible and brilliant" Yudkowsky is a central figure throughout the book. His early musings on the potential and dangers of artificial intelligence during the mid- to late-2000s gave birth to the Rationalist movement, a loose community dedicated to AI safety. Chivers, a former science journalist with BuzzFeed and the Telegraph, offers a meticulously researched investigation into who the Rationalists are, and more importantly why they believe humanity is fast approaching an inflection point between "extinction and godhood".


Maximizing paper clips

#artificialintelligence

In What's the Future, Tim O'Reilly argues that our world is governed by automated systems that are out of our control. Alluding to The Terminator, he says we're already in a "Skynet moment," dominated by artificial intelligence that can no longer be governed by its "former masters." The systems that control our lives optimize for the wrong things: they're carefully tuned to maximize short-term economic gain rather than long-term prosperity. The "flash crash" of 2010 was an economic event created purely by the software that runs our financial systems going awry. However, the real danger of the Skynet moment isn't what happens when the software fails, but when it is working properly: when it's maximizing short-term shareholder value, without considering any other aspects of the world we live in.